**Final Project Report**

Lejie LIU (f007gkf)

# Motivational image
The theme "Reflections on Time" reminds me of the old-fashioned clock, which I created for my CS22 3D Modeling course, a nearly burnt-out candle, and some small objects that used to be displayed in my childhood home.(#) Implemented Features![]()
1. Extra Emitters: Point Light, Directional Light, and Spot Light
2. Parallelized Rendering with Nanothread
3. Moderate BSDF: Rough Conductor Material
4. Adaptive Sampling with Denoising
   - 4.1 Basic NL-means denoising
   - 4.2 Joint NL-means denoising with normal and albedo feature buffers
   - 4.3 Adaptive sampling with pixel variance estimates
   - 4.4 Integration of Intel Open Image Denoise
5. Homogeneous Scattering Participating Media

## 1. Extra Emitters
The following lights are implemented after Blender's light types. All of them are implemented as subclasses of the Surface class, which makes them compatible with the existing area lights (built from simple geometries or meshes) during light sampling.

1. Directional Light

Code: src/surfaces/directional_light.cpp

Parameters:
+ Angle: determines the angular size of the light source (the angle mainly contributes to shadow softness)
+ Irradiance
+ Transform

Validation:

2. Point Light

Code: src/surfaces/point_light.cpp

Parameters:
+ Radius: determines the sampling size of the light source and affects shadow softness
+ Power
+ Transform

Validation:
3. Spot Light

Code: src/surfaces/spot_light.cpp

Parameters:
+ Radius: same as the point light
+ Power
+ Cutoff angle
+ Cutoff blur

Spot lights are built on top of point lights; they add an attenuation term computed from the outgoing light direction relative to the spotlight axis, as sketched below.
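The following is a hypothetical sketch of how the cutoff angle and cutoff blur can shape this angular falloff; the function and parameter names are mine, not the actual code in src/surfaces/spot_light.cpp.

```cpp
#include <cmath>

// Hypothetical spot-light falloff: 1 inside the blurred inner cone,
// 0 outside the cutoff cone, with a smooth transition in between.
float spot_attenuation(float cos_theta,    // dot(spot axis, direction to shading point)
                       float cutoff_angle, // cone half-angle in radians
                       float cutoff_blur)  // 0 = hard edge, 1 = fully soft
{
    float cos_outer = std::cos(cutoff_angle);
    float cos_inner = std::cos(cutoff_angle * (1.0f - cutoff_blur));
    if (cos_theta <= cos_outer) return 0.0f; // outside the cone
    if (cos_theta >= cos_inner) return 1.0f; // fully lit inner cone
    float t = (cos_theta - cos_outer) / (cos_inner - cos_outer);
    return t * t * (3.0f - 2.0f * t);        // smoothstep transition
}
```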
## 2. Parallelized Rendering with Nanothread

Code: src/scene.cpp

My code splits the rendering work across threads by pixel, using Nanothread.

Render time without multi-threading:

```
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-19-06-37.png"
Scene Rendering │█████████████████████████████████│ (2.442s)
```

Render time with multi-threading:

```
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-19-13-09.png"
Scene Rendering │█████████████████████████████████│ (544ms)
```

Multi-threading greatly reduces the rendering time while leaving the result nearly unchanged. A sketch of the per-pixel parallel loop follows.
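For reference, here is a minimal sketch of such a per-pixel parallel loop using Nanothread's parallel_for, following the usage shown in the library's README; the shade lambda is a placeholder stand-in for the actual per-pixel path tracing in src/scene.cpp.

```cpp
#include <nanothread/nanothread.h>
#include <cstdint>
#include <vector>

namespace dr = drjit; // nanothread lives in the drjit namespace

// Fill a framebuffer in parallel, one block of pixels per task.
void render_parallel(std::vector<float> &framebuffer, int width, int height) {
    auto shade = [](int x, int y) { return float((x ^ y) & 0xFF) / 255.0f; }; // placeholder

    dr::parallel_for(
        dr::blocked_range<uint32_t>(/* begin */ 0, /* end */ uint32_t(width * height),
                                    /* block_size */ 64),
        [&](dr::blocked_range<uint32_t> range) {
            for (uint32_t i = range.begin(); i != range.end(); ++i) {
                int x = int(i % uint32_t(width));
                int y = int(i / uint32_t(width));
                framebuffer[i] = shade(x, y); // each pixel is written by exactly one task
            }
        });
}
```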
## 3. Rough Conductor Material

Code: src/materials/conductor.cpp
I implemented a conductor material with roughness control:
+ eta and k are used to compute the Fresnel term.
+ The GGX normal distribution function models the surface microstructure.
+ The Smith geometry term accounts for shadowing and masking (a sketch of these terms follows the material list below).
+ Anisotropy is not supported in my version.

Rendering results for various conductor materials:
+ Copper ("eta": [0.200438, 0.924033, 1.10221], "k": [3.91295, 2.45285, 2.14219])
+ Gold ("eta": [0.143119, 0.374957, 1.44248], "k": [3.98316, 2.38572, 1.60322])
+ Stainless ("eta": [1.65746, 0.880369, 0.521229], "k": [9.22387, 6.26952, 4.837])
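For reference, here is an illustrative sketch of the isotropic GGX distribution and Smith masking terms described above; this is standard microfacet math, not the exact code in src/materials/conductor.cpp.

```cpp
#include <cmath>

static const float PI = 3.14159265358979f;

// GGX / Trowbridge-Reitz normal distribution function D(h).
// cos_h = dot(n, h); alpha is the roughness parameter.
float ggx_D(float cos_h, float alpha) {
    if (cos_h <= 0.0f) return 0.0f;
    float a2 = alpha * alpha;
    float d  = cos_h * cos_h * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * d * d);
}

// Smith masking term for one direction (cos_v = dot(n, v)); the full
// geometry term uses G(wi, wo) = G1(wi) * G1(wo) under the separable
// approximation.
float smith_G1(float cos_v, float alpha) {
    float a2 = alpha * alpha;
    return 2.0f * cos_v / (cos_v + std::sqrt(a2 + (1.0f - a2) * cos_v * cos_v));
}
```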
Rendering results with different roughness:
## 4. Adaptive Sampling with Denoising

### 4.1 Basic NL-means Denoising

Code: src/scene.cpp, denoise_nlm()

I implemented a basic Non-Local Means (NLM) denoising function. It exploits self-similarity in the image using a patch-based approach.

Parameters:
+ Search radius: the size of the region around a pixel p from which candidate pixels q are selected; this balances computational cost against accuracy.
+ Patch radius: controls the size of the patch P(p); larger patches improve robustness but blur small details.
+ Filtering strength

Result of denoising:

### 4.2 Joint NL-means Denoising with Normal and Albedo Feature Buffers

Joint NLM extends the standard NLM above by incorporating additional feature data; in my project, normals and albedo guide the denoising process. The patch distance is now a weighted combination of three components:

distance = color_weight * color_dist + normal_weight * normal_dist + albedo_weight * albedo_dist

where color_dist = length2(Color(A) - Color(B)), normal_dist = 1 - cos_theta(A, B), and albedo_dist = length2(Albedo(A) - Albedo(B)). The combined distance is used to compute the final filter weight, which helps preserve detail while denoising. A sketch of this computation follows.
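Here is a self-contained sketch of the joint distance and the resulting filter weight. For brevity it compares single pixels, whereas the actual filter averages these distances over the patches P(p) and P(q); all names are illustrative rather than taken from src/scene.cpp.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length2(const Vec3 &a, const Vec3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}
static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Combined distance between pixels A and B using color and feature buffers.
float joint_distance(const Vec3 &colorA, const Vec3 &colorB,
                     const Vec3 &normalA, const Vec3 &normalB,
                     const Vec3 &albedoA, const Vec3 &albedoB,
                     float color_weight, float normal_weight, float albedo_weight) {
    float color_dist  = length2(colorA, colorB);
    float normal_dist = 1.0f - dot(normalA, normalB); // unit normals assumed
    float albedo_dist = length2(albedoA, albedoB);
    return color_weight * color_dist
         + normal_weight * normal_dist
         + albedo_weight * albedo_dist;
}

// NL-means filter weight with filtering strength h: distances near zero
// give weights near one; large distances are suppressed exponentially.
float nlm_weight(float distance, float h) {
    return std::exp(-std::max(distance, 0.0f) / (h * h));
}
```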
Comparison between NLM and joint NLM with feature buffers: it is hard to see an obvious improvement on the simple Jensen box geometry.
For a complex geometry like the Ajax statue, however, plain NLM blurs geometric details such as the hair and beard. By using the normal information, joint NLM preserves these details.

### 4.3 Adaptive Sampling with Pixel Variance Estimates

Adaptive sampling dynamically adjusts the number of samples taken per pixel based on the estimated variance of the rendered colors, letting the renderer concentrate effort on noisy areas. I first encountered adaptive sampling in Maya's Arnold renderer, which exposes *max_sample* and *threshold* parameters. My adaptive sampling has three key steps (a sketch of the variance tracking follows the comparison below):

1. Initial sampling: every pixel receives a minimum number of samples, during which the variance is estimated with Welford's online algorithm. This avoids storing the whole history of sampled colors.
2. Adaptive sampling: pixels whose variance exceeds the threshold receive further samples in a second pass, until the variance falls below the threshold or max_num_sample is reached.
3. Stats output: a sampling heatmap and related statistics are generated to visualize the adaptive sampling process.

Comparison:
A: sampling all pixels 128 times
B: adaptive sampling with min 16, max 128, threshold 0.005

```
# Do not use adaptive sampling
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-21-52-54.png"
Scene Rendering │█████████████████████████████████│ (38.785s)

# use adaptive sampling with min-16, max-256, threshold 0.01
Will save rendered image to "../../scenes/final_project/jensen_box_diffuse_mis-2024-11-25-21-55-09.png"
Scene Rendering │█████████████████████████████████│ (32.152s)
Saved sampling heatmap to: sample_heatmap.png
Adaptive sampling statistics:
  Average samples per pixel: 103.21349
  Min samples used: 16
  Max samples used: 128
```
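The following sketch shows the Welford update used in step 1, tracking a single scalar (e.g. luminance) per pixel for brevity; the struct and function names are illustrative, not the actual renderer code.

```cpp
// Welford's online algorithm: mean and variance without storing samples.
struct PixelStats {
    int   n    = 0;    // samples taken so far
    float mean = 0.0f; // running mean
    float M2   = 0.0f; // running sum of squared deviations

    void add(float sample) {
        ++n;
        float delta = sample - mean;
        mean += delta / float(n);
        M2   += delta * (sample - mean); // uses the updated mean
    }

    // Unbiased sample variance.
    float variance() const { return n > 1 ? M2 / float(n - 1) : 0.0f; }

    // Variance of the mean estimate; one common quantity to threshold,
    // since it shrinks as more samples are taken.
    float variance_of_mean() const { return n > 1 ? variance() / float(n) : 0.0f; }
};

// Step 2: keep sampling while the estimate is noisy and the budget allows.
bool needs_more_samples(const PixelStats &s, int max_samples, float threshold) {
    return s.n < max_samples && s.variance_of_mean() > threshold;
}
```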
In the heatmap, red indicates pixels with higher sample counts and blue indicates lower ones. The two results are nearly indistinguishable after denoising, while the total render time is reduced. Increasing the maximum sample count and decreasing the threshold further improves quality in noisy areas.

### 4.4 Integrating Intel's Open Image Denoise

Beyond basic and joint NLM, Intel's Open Image Denoise (OIDN) library is integrated to further improve the final quality.

Integration (CMakeLists.txt):

```
set(OIDN_DIR "PATH TO OIDN")
include_directories(${OIDN_DIR}/include)
link_directories(${OIDN_DIR}/lib)
target_include_directories(darts PRIVATE ${OIDN_DIR}/include)
target_link_libraries(darts PRIVATE
    darts_lib
    ${OIDN_DIR}/lib/OpenImageDenoise.lib
    ${OIDN_DIR}/lib/OpenImageDenoise_core.lib
)
```

Code: src/include/sampler.h, denoiseImage_Intel()

Intel's Open Image Denoise also accepts feature buffers (albedo and normals) as inputs; a sketch of its usage follows.
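For reference, this is a minimal sketch of invoking OIDN's generic "RT" filter with color, albedo, and normal buffers, following the library's documented C++ API; the function name and buffer layout here are illustrative, not the actual denoiseImage_Intel().

```cpp
#include <OpenImageDenoise/oidn.hpp>
#include <iostream>
#include <vector>

// Denoise an interleaved float RGB framebuffer.
void denoise_rgb(std::vector<float> &color,  // noisy beauty (RGB)
                 std::vector<float> &albedo, // albedo feature buffer (RGB)
                 std::vector<float> &normal, // normal feature buffer (RGB)
                 std::vector<float> &output, // denoised result (RGB)
                 size_t width, size_t height) {
    oidn::DeviceRef device = oidn::newDevice();
    device.commit();

    oidn::FilterRef filter = device.newFilter("RT"); // generic ray-tracing filter
    filter.setImage("color",  color.data(),  oidn::Format::Float3, width, height);
    filter.setImage("albedo", albedo.data(), oidn::Format::Float3, width, height);
    filter.setImage("normal", normal.data(), oidn::Format::Float3, width, height);
    filter.setImage("output", output.data(), oidn::Format::Float3, width, height);
    filter.set("hdr", true); // radiance values may exceed 1
    filter.commit();

    filter.execute();

    const char *message;
    if (device.getError(message) != oidn::Error::None)
        std::cerr << "OIDN error: " << message << std::endl;
}
```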
## 5. Homogeneous Scattering Participating Media

1. Implemented homogeneous absorption and scattering with the Henyey-Greenstein phase function.
2. Implemented a volumetric path tracer (using both NEE and MIS).
3. A medium is stored inside a material, so a dielectric material enclosing a medium is easy to set up.
4. Heterogeneous participating media are not implemented.

Here is a series of test scenes compared against Blender's Principled Volume node (a sketch of the phase function and distance sampling follows the list):

1. Density = 1 with an area light
2. Density = 0.5 with an area light
3. Comparison with Blender's Volume Scatter node (setting the total extinction to color * density and the albedo to 1)
4. A medium placed inside another, foggy medium
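Below is an illustrative sketch of the Henyey-Greenstein phase function, its direction sampling, and homogeneous free-flight distance sampling; these are the standard textbook formulas, not the exact renderer code.

```cpp
#include <algorithm>
#include <cmath>

static const float INV_FOUR_PI = 0.25f / 3.14159265358979f;

// Henyey-Greenstein phase function; cos_theta is measured between the
// incoming and outgoing propagation directions, so g > 0 favors forward
// scattering and g < 0 backward scattering.
float hg_phase(float cos_theta, float g) {
    float denom = 1.0f + g * g - 2.0f * g * cos_theta;
    return INV_FOUR_PI * (1.0f - g * g) / (denom * std::sqrt(std::max(denom, 1e-7f)));
}

// Inverse-CDF sampling of cos(theta) for the same convention (xi in [0, 1)).
float hg_sample_cos_theta(float g, float xi) {
    if (std::abs(g) < 1e-3f)
        return 1.0f - 2.0f * xi; // isotropic limit
    float sqr = (1.0f - g * g) / (1.0f - g + 2.0f * g * xi);
    return (1.0f + g * g - sqr * sqr) / (2.0f * g);
}

// Free-flight distance in a homogeneous medium with extinction sigma_t:
// pdf(t) = sigma_t * exp(-sigma_t * t).
float sample_distance(float sigma_t, float xi) {
    return -std::log(1.0f - xi) / sigma_t;
}
```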
# Final Image